
    Search Process as Transitions Between Neural States

    Search is one of the most frequently performed activities on the World Wide Web. Various conceptual models postulate that the search process can be broken down into distinct emotional and cognitive states that searchers pass through while they engage in a search. These models contribute significantly to our understanding of the search process. However, they are typically based on self-report measures, such as surveys and questionnaires, and therefore only indirectly monitor the brain activity that supports the process. With this work, we take one step further and directly measure the brain activity involved in a search process. To do so, we break a search process down into five time periods: the realisation of an Information Need, Query Formulation, Query Submission, Relevance Judgment and Satisfaction Judgment. We then investigate the brain activity between these time periods. Using functional Magnetic Resonance Imaging (fMRI), we monitored the brain activity of twenty-four participants during a search process that involved answering questions carefully selected from the TREC-8 and TREC 2001 Q/A Tracks. This novel analysis, which focuses on transitions rather than states, reveals the contrasting brain activity between time periods, enabling the identification of the distinct parts of the search process as the user moves through them. This work therefore provides an important first step towards representing the search process in terms of transitions between neural states. Discovering more precisely how brain activity relates to different parts of the search process will enable the development of brain-computer interactions that better support search and search interactions, a goal we believe our study and conclusions advance.

    Evaluating Multimodal Driver Displays of Varying Urgency

    Previous studies have evaluated Audio, Visual and Tactile warnings for drivers, highlighting the importance of conveying the appropriate level of urgency through the signals. However, these modalities have never been combined exhaustively with different urgency levels and tested in a driving simulator. This paper describes two experiments investigating all multimodal combinations of such warnings across three different levels of designed urgency. The warnings were first evaluated in terms of perceived urgency and perceived annoyance in the context of a driving simulator. The results showed that the perceived urgency matched the designed urgency of the warnings. More urgent warnings were also rated as more annoying, although the effect on annoyance was smaller than the effect on urgency. The warnings were then tested for recognition time when presented during a simulated driving task. Warnings of high urgency induced quicker and more accurate responses than warnings of medium and low urgency. In both studies, the number of modalities used in the warnings (one, two or three) affected both subjective and objective responses: more modalities led to higher ratings of urgency and annoyance (with annoyance again rising less steeply than urgency) and to quicker responses. These results provide implications for multimodal warning design and reveal how modalities and modality combinations can influence participant responses during a simulated driving task.

    Audiovisual integration of emotional signals from others' social interactions

    Audiovisual perception of emotions has typically been examined using displays of a solitary character (e.g., the face, voice and/or body of a single actor). However, in real life, humans often face more complex multisensory social situations involving more than one person. Here we ask whether the audiovisual facilitation of emotion recognition previously found in simpler social situations extends to more complex and ecological situations. Stimuli consisting of the biological motion and voices of two interacting agents were used in two experiments. In Experiment 1, participants were presented with visual, auditory, auditory filtered/noisy, and audiovisual congruent and incongruent clips, and were asked to judge whether the two agents were interacting happily or angrily. In Experiment 2, another group of participants repeated the same task while trying to ignore either the visual or the auditory information. The findings from both experiments indicate that when the reliability of the auditory cue was decreased, participants weighted the visual cue more heavily in their emotional judgments. This in turn translated into increased emotion recognition accuracy for the multisensory condition. Our findings thus point to a common mechanism of multisensory integration of emotional signals irrespective of social stimulus complexity.

    “Some like it hot”: spectators who score high on the personality trait openness enjoy the excitement of hearing dancers breathing without music

    Music is an integral part of dance. Over the last 10 years, however, dance stimuli (without music) have been repeatedly used to study action observation processes, increasing our understanding of the influence of observers’ physical abilities on action perception. Yet beyond trained skills and empathy traits, very little is known about how other spectator characteristics modulate action observation and action preference. Since strong correlations have been shown between music and personality traits, here we investigate how personality traits shape the appreciation of dance when it is presented with three different sound scores. We examined the relationship between personality traits and the subjective esthetic experience of 52 spectators watching a 24-minute contemporary dance performance, projected on a big screen, consisting of three movement phrases each performed to a different sound score: classical music (i.e., Bach), an electronic sound score, and a section without music in which the breathing of the performers was audible. We found, first, that spectators rated the experience of watching dance without music significantly differently from watching it with music. Second, the higher spectators scored on the Big Five personality factor openness, the more they liked the no-music section. Third, spectators’ physical experience with dance was not linked to their appreciation but was significantly related to high extraversion scores. For the first time, we show that spectators’ reported entrainment to watching dance movements without music is strongly related to their personality, which may therefore need to be considered when using dance to investigate action observation processes and esthetic preferences.

    A Wireless Future: performance art, interaction and the brain-computer interfaces

    Although the use of Brain-Computer Interfaces (BCIs) in the arts originates in the 1960s, there is only a limited number of known applications in the context of real-time audio-visual and mixed-media performances, and accordingly the knowledge base of this area has not been developed sufficiently. Among the reasons are the difficulties and unknown parameters involved in the design and implementation of BCIs. Today, however, with the spread of new wireless devices, the field is growing and changing rapidly. In this context, we examine a selection of representative works and artists in the light of current scientific evidence. We identify important performative and neuroscientific aspects, issues and challenges. A model of possible interactions between the performers and the audience is discussed, and future trends regarding liveness and interconnectivity are suggested.

    CONSTELLATIONS OF MOVEMENT: An Interactive Visualization of Functional Mapping for Motor Imagery Decoding Research

    We present an interactive application that visualizes multivariate functional mapping of fMRI data within a 3D structural model of the brain. The application was developed as a proof of concept, both of the efficacy of interactive 3D visualization for representing functional-mapping research and of the potential of the Unity 3D game engine as a visualization tool for the complex data involved in research on functional neural activity.

    Differences in fMRI intersubject correlation while viewing unedited and edited videos of dance performance

    Intersubject Correlation (ISC) analysis of fMRI data provides insight into how continuous streams of sensory stimulation are processed by groups of observers. Although edited movies are frequently used as stimuli in ISC studies, there has been little direct examination of the effect of edits on the resulting ISC maps. In this study, we showed 16 observers two audiovisual movie versions of the same dance. One condition provided a continuous view from a single camera (Unedited condition); the other cut between views from different cameras (Edited condition) that provided close-up views of the feet or of the face and upper body. We computed ISC maps for each condition and also created a map showing the difference between the conditions. The Unedited and Edited maps largely overlapped in the occipital and temporal cortices, although more voxels were found for the Edited map. The difference map revealed greater ISC for the Edited condition in the Postcentral Gyrus, Lingual Gyrus, Precentral Gyrus and Medial Frontal Gyrus, while the Unedited condition showed greater ISC only in the Superior Temporal Gyrus. These findings suggest that the visual changes associated with editing provide a source of correlation in maps obtained from edited film, and highlight the utility of difference maps for evaluating ISC across conditions.
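The core of an ISC analysis can be sketched in a few lines. The leave-one-out formulation below (correlating each subject's voxel time course with the average of the remaining subjects) is one common variant, not necessarily the exact pipeline used in this study; the array shapes and function name are illustrative:

```python
import numpy as np

def isc_map(data):
    """Leave-one-out intersubject correlation.

    data: array of shape (n_subjects, n_voxels, n_timepoints).
    Returns (n_subjects, n_voxels): each subject's per-voxel Pearson
    correlation with the mean time course of the remaining subjects.
    """
    n_subj = data.shape[0]
    isc = np.zeros(data.shape[:2])
    for s in range(n_subj):
        # average time course of everyone except subject s
        others = np.delete(data, s, axis=0).mean(axis=0)
        a = data[s] - data[s].mean(axis=1, keepdims=True)
        b = others - others.mean(axis=1, keepdims=True)
        isc[s] = (a * b).sum(axis=1) / (
            np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return isc
```

In practice, the resulting maps are typically thresholded with nonparametric (e.g., permutation-based) tests before group- or condition-level comparisons are interpreted.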

    Sticky Hands


    A dyadic stimulus set of audiovisual affective displays for the study of multisensory, emotional, social interactions

    We describe the creation of the first multisensory stimulus set consisting of dyadic, emotional, point-light interactions combined with voice dialogues. The set includes 238 unique clips presenting happy, angry and neutral emotional interactions at low, medium and high levels of emotional intensity between nine different actor dyads. The set was evaluated in an experiment with a between-subjects design and was found to be suitable for broad application in the cognitive and neuroscientific study of biological motion and voice, the perception of social interactions, and multisensory integration. We also detail a number of supplementary materials, comprising AVI movie files for each interaction, text files specifying the three-dimensional coordinates of each point-light in each frame of the movie, and unprocessed AIFF audio files for each captured dialogue. The full set of stimuli is available to download from: http://motioninsocial.com/stimuli_set/
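As an illustration of how the supplementary coordinate files might be consumed, the sketch below loads one text file into a frames × points × coordinates array. The whitespace-separated x y z layout and the function name are assumptions made here for illustration; the stimulus set's own documentation defines the actual file format:

```python
import numpy as np

def load_pointlight_frames(path, n_points):
    """Load point-light coordinates from a plain-text file.

    Assumes (hypothetically) one whitespace-separated "x y z" triple
    per line, with each frame's n_points markers listed consecutively.
    Returns an array of shape (n_frames, n_points, 3).
    """
    return np.loadtxt(path).reshape(-1, n_points, 3)
```

A loader like this makes it straightforward to re-render the motion, compute kinematic features, or time-align the point-light data with the matching AIFF audio track.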

    Improving Motor Imagination with Support of Real-Time LORETA Neurofeedback

    Recording cortical activity during imagined leg movement is challenging because the cortical representation of the legs lies deep within the central sulcus. Brain-Computer Interface (BCI) studies therefore typically rely on imagined movement of both legs [1]. The activity of deeper cortical structures can be estimated offline from multichannel Electroencephalography (EEG) using the LORETA numerical method [2]. LORETA can also be calculated in real time to provide an instantaneous estimate of brain activity, but the currently available solution supports only up to 19 channels (BrainAvatar, BrainMaster Inc.). In this study we propose a custom-designed real-time LORETA neurofeedback system, based on multichannel EEG, to increase cortical activity at the central sulcus during continuous imagined tapping with one leg only. This strategy could be useful in the neurorehabilitation of hemiplegia (e.g., after stroke).
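The real-time step of such a pipeline reduces to multiplying each incoming EEG sample by a precomputed linear inverse operator, which is cheap enough for feedback at the sampling rate. The sketch below builds a generic Tikhonov-regularized minimum-norm operator; LORETA proper additionally applies a spatial Laplacian weighting to the sources, and the leadfield matrix and regularization heuristic here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def make_inverse_operator(leadfield, snr=3.0):
    """Tikhonov-regularized minimum-norm inverse operator.

    leadfield: (n_channels, n_sources) forward model, assumed known
    from a head model. Returns W of shape (n_sources, n_channels);
    source estimates for one EEG sample x are simply W @ x.
    """
    n_ch = leadfield.shape[0]
    gram = leadfield @ leadfield.T
    # simple SNR-based regularization heuristic (assumption)
    lam = np.trace(gram) / (n_ch * snr ** 2)
    return leadfield.T @ np.linalg.inv(gram + lam * np.eye(n_ch))

# real-time loop (per sample): sources = W @ eeg_sample
```

Because W is fixed once the head model is computed, the per-sample cost is a single matrix-vector product, which is what makes closed-loop source-space neurofeedback feasible.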